Superfast Second-Order Methods for Unconstrained Convex Optimization
Authors
Yurii Nesterov
Abstract
In this paper, we present new second-order methods with convergence rate $$O\left( k^{-4}\right) $$, where $$k$$ is the iteration counter. This rate is faster than the existing lower bound for this type of schemes (Agarwal and Hazan in Proceedings of the 31st Conference on Learning Theory, PMLR, pp. 774–792, 2018; Arjevani et al. in Math Program 178(1–2):327–360, 2019), which is $$O\left( k^{-7/2}\right) $$. Our progress can be explained by a finer specification of the problem class. The main idea of the approach consists in the implementation of the third-order scheme from Nesterov (Math Program 186:157–183, 2021) using a second-order oracle. At each iteration of our method, we solve a nontrivial auxiliary problem by a linearly convergent scheme based on the relative non-degeneracy condition (Bauschke et al. in Math Oper Res 42:330–348, 2016; Lu et al. in SIOPT 28(1):333–354, 2018). During this process, the Hessian of the objective function is computed once, while the gradient is computed $$O\left( \ln \frac{1}{\epsilon }\right) $$ times, where $$\epsilon $$ is the desired accuracy of the solution of the problem.
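To make the inner loop concrete, here is a minimal Python sketch of the kind of Bregman (relatively smooth) gradient iteration the abstract refers to: each step solves $$\nabla \rho (x_{k+1})=\nabla \rho (x_k)-\tfrac{1}{L}\nabla \varphi (x_k)$$ for a convex reference function $$\rho $$. The reference function, the test problem, and the constant L_rel below are illustrative assumptions, not the paper's actual construction; the linear rate claimed in the abstract additionally requires the relative non-degeneracy (relative strong convexity) condition.

```python
import numpy as np

# Sketch of a Bregman gradient method under relative smoothness. Reference
# function (an assumption for this toy): rho(x) = 0.5*||x||^2 + 0.25*||x||^4.

def inverse_mirror(z):
    """Solve grad_rho(x) = z, where grad_rho(x) = (1 + ||x||^2) * x.

    Writing x = (t / ||z||) * z with t = ||x|| reduces this to the scalar
    cubic t^3 + t = ||z||, which has a unique (nonnegative) real root.
    """
    r = np.linalg.norm(z)
    if r == 0.0:
        return np.zeros_like(z)
    roots = np.roots([1.0, 0.0, 1.0, -r])            # t^3 + t - r = 0
    t = roots[np.abs(roots.imag) < 1e-8].real.max()
    return (t / r) * z

def bregman_gradient(grad_phi, x0, L_rel, n_iters=2000):
    """Iterate grad_rho(x_{k+1}) = grad_rho(x_k) - grad_phi(x_k) / L_rel."""
    x = x0.copy()
    for _ in range(n_iters):
        z = (1.0 + x @ x) * x - grad_phi(x) / L_rel
        x = inverse_mirror(z)
    return x

# Toy problem: phi(x) = 0.25 * ||A x - b||^4 is smooth relative to rho above;
# L_rel is a deliberately crude upper estimate for this particular instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
grad_phi = lambda x: np.linalg.norm(A @ x - b) ** 2 * (A.T @ (A @ x - b))
nA = np.linalg.norm(A, 2)
x_sol = bregman_gradient(grad_phi, np.zeros(5), L_rel=6.0 * nA**2 * max(nA**2, b @ b))
print("gradient norm:", np.linalg.norm(grad_phi(x_sol)))
```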
Similar references
Oracle Complexity of Second-Order Methods for Smooth Convex Optimization
Second-order methods, which utilize gradients as well as Hessians to optimize a given function, are of major importance in mathematical optimization. In this work, we study the oracle complexity of such methods, or equivalently, the number of iterations required to optimize a function to a given accuracy. Focusing on smooth and convex functions, we derive (to the best of our knowledge) the firs...
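A compact way to see how this lower bound coexists with the $$O\left( k^{-4}\right) $$ rate above (stated informally; the precise classes are in the cited papers): the lower bound is taken over convex functions with Lipschitz continuous Hessian, while the superfast rate assumes the smaller class with Lipschitz continuous third derivative,

$$\max _{f:\ \nabla ^{2}f\ \text{Lipschitz}}\big ( f(x_k)-f^{*}\big )\ \ge \ c\,k^{-7/2},\qquad f(x_k)-f^{*}\ \le \ C\,k^{-4}\ \ \text{when } \nabla ^{3}f \text{ is Lipschitz}.$$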
Complexity bounds for second-order optimality in unconstrained optimization
This paper examines worst-case evaluation bounds for finding weak minimizers in unconstrained optimization. For the cubic regularization algorithm, Nesterov and Polyak (2006) [15] and Cartis et al. (2010) [3] show that at most $$O(\epsilon ^{-3})$$ iterations may have to be performed for finding an iterate which is within $$\epsilon $$ of satisfying second-order optimality conditions. We first show that this bound can be...
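For reference, the cubic regularization step analyzed in these works minimizes a cubic upper model of the objective (normalizations of the cubic coefficient vary between the cited papers; here $$M$$ plays the role of an estimate of the Lipschitz constant of the Hessian):

$$x_{+}=x+\arg \min _{h}\Big \{\langle \nabla f(x),h\rangle +\tfrac{1}{2}\langle \nabla ^{2}f(x)\,h,h\rangle +\tfrac{M}{6}\,\Vert h\Vert ^{3}\Big \}.$$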
First-order Methods for Geodesically Convex Optimization
Geodesic convexity generalizes the notion of (vector space) convexity to nonlinear metric spaces. But unlike convex optimization, geodesically convex (g-convex) optimization is much less developed. In this paper we contribute to the understanding of g-convex optimization by developing iteration complexity analysis for several first-order algorithms on Hadamard manifolds. Specifically, we prove ...
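For context, the standard definition (not specific to the cited paper): a function $$f$$ is geodesically convex if, along every geodesic $$\gamma $$ of the space,

$$f(\gamma (t))\ \le \ (1-t)\,f(\gamma (0))+t\,f(\gamma (1)),\qquad t\in [0,1].$$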
Regularized Newton method for unconstrained convex optimization
We introduce the regularized Newton method (rnm) for unconstrained convex optimization. For any convex function, with a bounded optimal set, the rnm generates a sequence that converges to the optimal set from any starting point. Moreover the rnm requires neither strong convexity nor smoothness properties in the entire space. If the function is strongly convex and smooth enough in the neighborho...
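A minimal sketch of a generic regularized (Levenberg-style) Newton step of this flavor; the damping rule and exact update of the cited rnm are in the paper, so the fixed regularization parameter below is only a placeholder.

```python
import numpy as np

def regularized_newton_step(grad, hess, x, lam):
    """One generic regularized Newton step: x+ = x - (H + lam*I)^{-1} g.

    lam > 0 keeps the linear system well posed even where the Hessian is
    singular, which is how such schemes avoid a strong-convexity assumption.
    A fixed lam is a placeholder; the cited rnm uses its own adaptive rule.
    """
    g, H = grad(x), hess(x)
    return x - np.linalg.solve(H + lam * np.eye(x.size), g)

# Toy usage: f(x) = sum_i log cosh(a_i . x) is smooth, convex, minimized at 0.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 3))
grad_f = lambda x: A.T @ np.tanh(A @ x)
hess_f = lambda x: A.T @ ((1.0 - np.tanh(A @ x) ** 2)[:, None] * A)
x = np.ones(3)
for _ in range(20):
    x = regularized_newton_step(grad_f, hess_f, x, lam=1e-6)
print("gradient norm:", np.linalg.norm(grad_f(x)))
```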
Sparse Second Order Cone Programming Formulations for Convex Optimization Problems
Second order cone program (SOCP) formulations of convex optimization problems are studied. We show that various SOCP formulations can be obtained depending on how auxiliary variables are introduced. An efficient SOCP formulation that increases the computational efficiency is presented by investigating the relationship between the sparsity of an SOCP formulation and the sparsity of the Schur com...
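A textbook instance of the auxiliary-variable device mentioned here (illustrative, not taken from the cited paper): a least-squares objective enters SOCP form through an epigraph variable $$t$$, and different choices of such variables lead to formulations with different sparsity patterns:

$$\min _{x}\ \Vert Ax-b\Vert _{2}\quad \Longleftrightarrow \quad \min _{x,\,t}\ \big \{\,t\ :\ \Vert Ax-b\Vert _{2}\le t\,\big \}.$$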
Journal
Journal title: Journal of Optimization Theory and Applications
Year: 2021
ISSN: 0022-3239 (print), 1573-2878 (electronic)
DOI: https://doi.org/10.1007/s10957-021-01930-y